Memory Block Relocation in Cache-Only Memory Multiprocessors
Authors
Abstract
Although a COMA machine appears to the programmer much like a traditional shared memory machine, a few aspects differentiate the attraction memories (AMs) from the cache memory in traditional cache-coherent multiprocessors [8]. One aspect unique to COMA is that the backing store of the AMs is secondary storage on disk. Therefore, unlike in a traditional multiprocessor cache, write-back to the backing store on memory block replacement must be avoided. This creates a problem unique to COMA: the need to relocate a memory block to a remote node when the block being replaced is the only valid copy in the AMs.
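The following is a minimal sketch, not the paper's actual protocol, of the replacement decision the abstract describes: the victim block is relocated to a remote node only when it is the last valid copy in the AMs; otherwise it can simply be dropped. All type and function names are illustrative assumptions.

```c
/* Hypothetical sketch of COMA block replacement with relocation.      */

typedef enum { INVALID, SHARED, MASTER } am_state_t; /* MASTER: only valid copy */

typedef struct {
    unsigned long tag;        /* identifies the memory block                */
    am_state_t    state;      /* coherence state in this attraction memory  */
    unsigned char data[64];   /* block payload; 64 bytes is an assumption   */
} am_block_t;

/* Hypothetical hooks into the node's coherence protocol. */
extern int  find_remote_node_with_space(unsigned long tag); /* -1 if none  */
extern void send_block_to_node(int node, const am_block_t *blk);

/* Called when 'victim' must leave this AM to make room for another block. */
void replace_block(am_block_t *victim)
{
    if (victim->state == MASTER) {
        /* Last valid copy: writing it back to disk is what COMA avoids,
         * so try to relocate it to a remote attraction memory instead.  */
        int target = find_remote_node_with_space(victim->tag);
        if (target >= 0)
            send_block_to_node(target, victim);
        /* A full protocol also needs a fallback when no node has room.  */
    }
    /* SHARED copies can simply be dropped: a valid copy exists elsewhere. */
    victim->state = INVALID;
}
```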
Similar resources
Unallocated Memory Space in COMA Multiprocessors
Cache only memory architecture (COMA) for distributed shared memory multiprocessors attempts to provide high utilization of local memory by organizing the local memory as a large cache, called attraction memory (AM), without traditional main memory. To facilitate caching of replicated data, it is desirable to have some of the physical storage space in the AMs left unallocated, i.e. not utilized...
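As a rough illustration of the point above (assumptions of ours, not taken from that paper), keeping a small pool of unallocated AM frames lets a node absorb an incoming replica, or a relocated master copy, without triggering an eviction; only when the pool is exhausted does the normal replacement path run.

```c
/* Illustrative only: a reserve of unallocated attraction-memory frames.
 * Names and sizes are assumptions.                                      */

#define AM_FRAMES        1024  /* frames in this node's attraction memory */
#define RESERVE_FRAMES     64  /* frames deliberately left unallocated    */

static int free_list[AM_FRAMES];          /* indices of free frames;
                                             populated at boot (init omitted) */
static int free_count = RESERVE_FRAMES;

extern int  pick_victim_frame(void);      /* normal replacement policy    */
extern void relocate_or_discard(int frm); /* see the sketch further above */

/* Return the frame that will hold an incoming block. */
int accept_incoming_block(void)
{
    if (free_count > 0)
        return free_list[--free_count];   /* cheap path: use the reserve  */

    /* Reserve exhausted: fall back to ordinary replacement. */
    int victim = pick_victim_frame();
    relocate_or_discard(victim);
    return victim;
}
```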
Coherence and Replacement Protocol of DICE - A Bus-Based COMA Multiprocessor
As microprocessors become faster and demand more bandwidth, the already limited scalability of a shared bus decreases even further. DICE, a shared-bus multiprocessor, utilizes cache only memory architecture (COMA) to effectively decrease the speed gap between modern high-performance microprocessors and the bus. DICE tries to optimize COMA for a shared-bus medium, in particular to reduce the det...
Improving the Data Cache Performance of Multiprocessor Operating Systems
Bus-based shared-memory multiprocessors with coherent caches have recently become very popular. To achieve high performance, these systems rely on increasingly sophisticated cache hierarchies. However, while these machines often run loads with substantial operating system activity, performance measurements have consistently indicated that the operating system uses the data cache hierarchy poorl...
Evaluation of Design Alternatives for a Directory-Based Cache Coherence Protocol in Shared-Memory Multiprocessors
In shared-memory multiprocessors, caches are attached to the processors in order to reduce the memory access latency. To keep the memory consistent, a cache coherence protocol is needed. A well known approach is to record which caches have copies of a memory block in a directory and only notify the caches having a copy when a processor modifies the block. Such a protocol is called a directory-b...
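A minimal sketch of the directory idea described above (with illustrative names, not taken from that paper): the directory entry records which caches hold a copy of a block, so a write notifies only those caches rather than broadcasting.

```c
#include <stdint.h>

#define NUM_CACHES 32            /* one private cache per processor       */

typedef struct {
    uint32_t presence;           /* bit i set => cache i holds the block  */
} dir_entry_t;

/* Hypothetical hook that delivers an invalidation to one cache. */
extern void send_invalidate(int cache_id, unsigned long block);

/* Processor 'writer' is about to modify 'block'; 'e' is its directory entry. */
void directory_handle_write(dir_entry_t *e, unsigned long block, int writer)
{
    /* Notify only the caches recorded in the presence vector. */
    for (int i = 0; i < NUM_CACHES; i++) {
        if (i != writer && (e->presence & (1u << i)))
            send_invalidate(i, block);
    }
    /* After the invalidations, the writer holds the only (dirty) copy. */
    e->presence = 1u << writer;
}
```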
Switch MSHR: A Technique to Reduce Remote Read Memory Access Time in CC-NUMA Multiprocessors
A remote memory access poses a severe problem for the design of CC-NUMA multiprocessors because it takes an order of magnitude longer than the local memory access. The large latency arises partly due to the increased distance between the processor and remote memory over the interconnection network. In this paper, we develop a new switch architecture, called Switch MSHR (SMSHR), which provides t...
Journal:
Volume, Issue:
Pages: -
Year of publication: 1995